
    MCMC and variational approaches for Bayesian inversion in diffraction imaging

    The term "diffraction imaging" is meant, herein, in the sense of an "inverse scattering problem" where the goal is to build up an image of an unknown object from measurements of the scattered field that results from its interaction with a known probing wave. This type of problem occurs in many imaging and non-destructive testing applications. It corresponds to the situation where the search for a good trade-off between the image resolution and the penetration of the incident wave in the probed medium leads to choosing the frequency of the latter so that its wavelength lies in the "resonance" domain, in the sense that it is approximately of the same order of magnitude as the characteristic dimensions of the inhomogeneities of the inspected object. In this situation the wave-object interaction gives rise to important diffraction phenomena. This is the case for the two applications considered herein, where the interrogating waves are electromagnetic waves with wavelengths in the microwave and optical domains, whereas the characteristic dimensions of the sought object are 1 cm and 1 μm, respectively.
    The solution of an inverse problem obviously requires first constructing a forward model that expresses the scattered field as a function of the parameters of the sought object. In this model, diffraction phenomena are taken into account by means of domain integral representations of the electric fields. The forward model is then described by two coupled integral equations, whose discrete versions are obtained using a method of moments and whose inversion leads to a non-linear problem.
    Concerning inversion, at the beginning of the 1980s, accounting for diffraction phenomena was the subject of much attention in the field of acoustic imaging, for applications in geophysics, non-destructive testing or biomedical imaging. It led to techniques such as diffraction tomography, a term that denotes "applications that employ diffracting wavefields in the tomographic reconstruction process", but which generally implies reconstruction processes based on the generalized projection-slice theorem, an extension to the diffraction case of the projection-slice theorem of classical computed tomography, whose forward model is given by a Radon transform. This theorem relies upon first-order linearizing assumptions such as the Born or Rytov approximations. So, the term diffraction tomography was paradoxically used to describe reconstruction techniques adapted to weakly scattering environments, which do not provide quantitative information on highly contrasted dielectric objects such as those encountered in the applications considered herein, where multiple diffraction cannot be ignored. Furthermore, the resolution of these techniques is limited because evanescent waves are not taken into consideration. These limitations have led researchers to develop inversion algorithms able to deal with non-linear problems, at the beginning of the 1990s for microwave imaging and more recently for optical imaging. Many studies have focused on the development of deterministic methods, such as the Newton-Kantorovich algorithm, the modified gradient method (MGM) or the contrast-source inversion technique (CSI), where the solution is sought by iteratively minimizing, by means of a gradient method, a cost functional that expresses the discrepancy between the measured scattered field and the estimated model output.
    But, in addition to being non-linear, inverse scattering problems are also known to be ill-posed, which means that their resolution requires a regularization, which generally consists in introducing prior information on the sought object. In the present case, for example, we look for man-made objects composed of compact homogeneous regions made of a finite number of different materials, and with the aforementioned deterministic methods it is not easy to take such prior information into account, because it must be introduced into the cost functional to be minimized. On the contrary, the probabilistic framework of Bayesian estimation, the basis of the model presented herein, is especially well suited to this situation. Prior information is appropriately introduced via a probabilistic Gauss-Markov-Potts model: the marginal contrast distribution is modeled as a mixture of Gaussians, where each Gaussian distribution represents a class of materials, and the compactness of the regions is taken into account using a hidden Markov model. Estimation of the unknowns and of the parameters introduced in the prior model is performed via an unsupervised joint approach.
    Two iterative algorithms are proposed. The first one, denoted as the MCMC algorithm (Markov chain Monte Carlo), is rather classical; it consists in expressing the joint posterior and the conditional distributions of all the unknowns and then using a Gibbs sampling algorithm to estimate the posterior mean of the unknowns. This algorithm yields good results; however, it is computationally intensive, mainly because Gibbs sampling requires a significant number of samples. The second algorithm is based upon the variational Bayesian approximation (VBA). The latter was first introduced in the field of Bayesian inference for applications to neural networks, learning graphical models and model parameter estimation. Its appearance in the field of inverse problems is relatively recent, starting with source separation and image restoration. It consists in approximating the joint posterior distribution of all the unknowns by a free-form separable distribution that minimizes the Kullback-Leibler divergence with respect to the posterior law, which has interesting properties for optimization and leads to an implicit parametric optimization scheme. Once the approximate distribution is built up, the estimator can be easily obtained. A solution to this functional optimization problem can be found in terms of exponential distributions whose shape parameters are estimated iteratively. It can be noted that, at each iteration, the updating expression for these parameters is similar to the one that would be obtained if a gradient method were used to solve the optimization problem. Moreover, the gradient and the step size have an interpretation in terms of statistical moments (means, variances, etc.).
    Both algorithms are applied to two quite different configurations. The one related to microwave imaging is quasi-optimal: data are quasi-complete and frequency-diverse, which means that the scattered fields are measured all around the object for several directions of illumination and several frequencies. The configuration used in optical imaging is less favorable, since only aspect-limited data are available at a single frequency: illuminations and measurements can only be performed in a limited angular sector. This limited aspect reinforces the ill-posedness of the inverse problem and makes the introduction of prior information essential. However, it will be shown that, in both cases, satisfactory results are obtained.
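
    The variational Bayesian approximation described above is the standard mean-field scheme; the compact statement below uses generic notation (u for all unknowns, y for the data) rather than the paper's own symbols:

```latex
% Mean-field variational Bayesian approximation (generic notation):
% approximate the posterior p(u | y) of all unknowns u = (u_1, ..., u_K)
% by a separable law q that minimizes the Kullback-Leibler divergence.
\[
  q^\star = \arg\min_{q = \prod_k q_k}
  \mathrm{KL}\!\left( q(u) \,\|\, p(u \mid y) \right)
  = \arg\min_{q = \prod_k q_k} \int q(u) \ln \frac{q(u)}{p(u \mid y)} \, \mathrm{d}u .
\]
% Setting the functional derivative with respect to each factor to zero
% gives the implicit coordinate-wise updates, iterated until convergence:
\[
  q_k(u_k) \propto \exp\!\left\{ \mathbb{E}_{\prod_{j \neq k} q_j}
  \left[ \ln p(y, u) \right] \right\},
  \qquad k = 1, \dots, K .
\]
```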

    Contrôle des erreurs pour la détection d'événements rares et faibles dans des champs de données massifs

    In this paper, we address the general issue of detecting rare and weak signatures in very noisy data. Multiple hypothesis testing approaches can be used to extract a list of components of the data that are likely to be contaminated by a source while controlling a global error criterion. However, most of the efficient methods available in the literature hold only for independent tests, or require specific dependency hypotheses. Based on the work of Benjamini and Yekutieli [1], we show that under some classical positivity assumptions, the Benjamini-Hochberg procedure for False Discovery Rate (FDR) control [2] can be directly applied to the statistics produced by a very common tool in signal and image processing that introduces dependency: the matched filter.
    We are interested in the detection of rare, low-intensity events in massive noisy data. Multiple hypothesis testing approaches can be used to extract a list of samples likely to contain information while controlling a global detection error criterion. In the literature, most of these approaches are only valid for independent tests, or under particular dependency assumptions. We show, by extending the work of Benjamini and Yekutieli [1], that under certain hypotheses it is nevertheless possible to apply the Benjamini-Hochberg False Discovery Rate (FDR) control procedure [2] to a matched-filter statistic that is widely used in signal and image processing.
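
    The Benjamini-Hochberg step referred to above is standard; a minimal sketch of it on generic p-values (not the paper's specific statistics) could look like this:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg procedure: return a boolean mask of the
    hypotheses rejected at FDR level alpha (generic sketch)."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)                     # sort p-values in ascending order
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds            # compare p_(k) to alpha * k / m
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k_max = np.max(np.nonzero(below)[0])  # largest k with p_(k) <= alpha * k / m
        rejected[order[: k_max + 1]] = True   # reject all hypotheses up to k_max
    return rejected
```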

    Nonparametric Bayesian extraction of object configurations in massive data

    This study presents an unsupervised method for the detection of configurations of objects based on a point process in a nonparametric Bayesian framework. This is of interest because the model presented here has a number of parameters that increases with the number of objects detected. The marked point process yields a natural sparse representation of the object configuration, even in massive data fields. However, Bayesian methods can lead to the evaluation of densities that raise computational issues, due to the huge number of detected objects. We have developed an iterative update of these densities when changes are made to the object configuration, which allows the computational cost to be reduced. The performance of the proposed algorithm is illustrated on synthetic data and on very challenging quasi-real hyperspectral data for young galaxy detection.
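
    The abstract does not spell out the iterative density update; the sketch below only illustrates the general idea of updating a configuration score incrementally when a single object is added, rather than recomputing everything (the `unary` and `pairwise` functions are hypothetical placeholders, not the paper's model):

```python
def total_energy(objects, unary, pairwise):
    """Full recomputation of a configuration score: O(n^2) pair terms."""
    e = sum(unary(o) for o in objects)
    e += sum(pairwise(a, b) for i, a in enumerate(objects) for b in objects[i + 1:])
    return e

def energy_after_birth(current_energy, objects, new_obj, unary, pairwise):
    """Incremental update when one object is added: only the terms that
    involve the new object are evaluated, instead of recomputing the sum."""
    return current_energy + unary(new_obj) + sum(pairwise(new_obj, o) for o in objects)
```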

    Variational Bayesian inversion for microwave breast imaging

    Microwave imaging is considered as a nonlinear inverse scattering problem and tackled in a Bayesian estimation framework. The object under test (a breast affected by a tumor) is assumed to be composed of compact regions made of a restricted number of different homogeneous materials. This a priori knowledge is defined by a Gauss-Markov-Potts distribution. First, we express the joint posterior of all the unknowns; then, we present in detail the variational Bayesian approximation used to compute the estimators and reconstruct both permittivity and conductivity maps. This approximation consists of the best separable probability law that approximates the true posterior distribution in the Kullback-Leibler sense, which leads to an implicit parametric optimization scheme that is solved iteratively. Some preliminary results, obtained by applying the proposed method to synthetic data, are presented and compared with those obtained by means of the classical contrast source inversion method.
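
    The Gauss-Markov-Potts prior mentioned above can be summarized, in generic notation, by the standard form of such a model (this is not a transcription of the paper's exact equations):

```latex
% Gauss-Markov-Potts prior (generic form): each pixel contrast x_j belongs
% to one of K material classes, indexed by a hidden label z_j.
\[
  p(x_j \mid z_j = k) = \mathcal{N}\!\left(x_j \mid m_k, v_k\right),
  \qquad k \in \{1, \dots, K\},
\]
% and the labels form a Potts Markov random field that favors compact,
% homogeneous regions (beta > 0 controls the spatial correlation):
\[
  p(z) \propto \exp\!\Bigl\{ \beta \sum_{j \sim j'} \delta\!\left(z_j, z_{j'}\right) \Bigr\},
\]
% where j ~ j' runs over pairs of neighboring pixels.
```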

    Approche variationnelle bayésienne pour la reconstruction tomographique

    The Bayesian inference framework provides an important tool for solving inverse problems through the probabilistic modeling of all unknown parameters. However, apart from simple prior models, the Bayesian computation of the optimal solution is complex. Consequently, the computational cost increases significantly, making the Bayesian solution hardly usable in practice. For this reason, two classes of methods that approximate the posterior law have been used: analytical ones, such as the Laplace approximation, and numerical ones, such as MCMC sampling methods. In this paper, we apply Bayesian inference to a tomographic reconstruction problem. To this end, we propose a Gauss-Markov field for the intensity distribution with a hidden Potts Markov field for the material class. The prior model is then a hierarchical Gauss-Markov-Potts model. Most of the model parameters are unknown, and we want to estimate them jointly with the object of interest. Using the variational Bayesian approach, the joint posterior law is approximated by a product of marginal laws from which the shape-parameter update equations are derived. We present the application of this approach to tomographic reconstruction and discuss the computational cost and the quality of the estimation.
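
    The shape-parameter updates described above are obtained, in the variational framework, by cycling through the factors of the separable approximation until convergence; an illustrative skeleton of such a loop (all names are hypothetical placeholders, not the paper's equations) is:

```python
def vba_loop(init_params, update_rules, max_iter=100, tol=1e-6):
    """Generic coordinate-ascent skeleton for a variational Bayesian
    approximation: each factor's shape parameter is updated in turn,
    holding the other factors fixed, until the changes become negligible."""
    params = dict(init_params)
    for _ in range(max_iter):
        max_change = 0.0
        for name, update in update_rules.items():
            new_value = update(params)   # expectation taken w.r.t. the other factors
            max_change = max(max_change, abs(new_value - params[name]))
            params[name] = new_value
        if max_change < tol:
            break
    return params
```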

    Microwave tomography for breast cancer detection within a Variational Bayesian Approach

    We consider a nonlinear inverse scattering problem where the goal is to detect breast cancer from measurements of the scattered field that results from the interaction of the breast with a known wave in the microwave frequency range. The modeling of the wave-object interaction is tackled through a domain integral representation of the electric field in a 2D-TM configuration. The inverse problem is solved in a Bayesian framework where the prior information is introduced via a Gauss-Markov-Potts model. A Variational Bayesian Approximation (VBA) technique is adapted to complex-valued contrasts and applied to compute the posterior estimators and reconstruct maps of both the permittivity and the conductivity. Results obtained by means of this approach from synthetic data are compared with those given by a deterministic contrast source inversion method.
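
    The domain integral representation referred to above is the usual pair of coupled equations relating the contrast, the total field inside the object and the measured scattered field; in generic 2D notation (sign and time conventions vary between papers):

```latex
% Coupling (state) equation: total field E inside the object domain D,
% with chi the contrast and G the 2D Green's function of the background medium:
\[
  E(\mathbf{r}) = E^{\mathrm{inc}}(\mathbf{r})
  + k_0^2 \int_{D} G(\mathbf{r}, \mathbf{r}')\, \chi(\mathbf{r}')\, E(\mathbf{r}')\, \mathrm{d}\mathbf{r}',
  \qquad \mathbf{r} \in D .
\]
% Observation (data) equation: scattered field on the measurement domain S:
\[
  E^{\mathrm{sca}}(\mathbf{r}) = k_0^2 \int_{D} G(\mathbf{r}, \mathbf{r}')\, \chi(\mathbf{r}')\, E(\mathbf{r}')\, \mathrm{d}\mathbf{r}',
  \qquad \mathbf{r} \in S .
\]
% Discretizing both equations with a method of moments yields the bilinear
% (hence non-linear in chi) forward model whose inversion is considered here.
```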

    Error control for the detection of rare and weak signatures in massive data

    In this paper, we address the general issue of detecting rare and weak signatures in very noisy data. Multiple hypothesis testing approaches can be used to extract a list of components of the data that are likely to be contaminated by a source while controlling a global error criterion. However, most of the efficient methods available in the literature are derived for independent tests. Based on the work of Benjamini and Yekutieli [1], we show that under some classical positivity assumptions, the Benjamini-Hochberg procedure for False Discovery Rate (FDR) control can be directly applied to the result produced by a very common tool in signal and image processing: the matched filter. This shows that, despite the dependency structure between the components of the matched filter output, the Benjamini-Hochberg procedure still guarantees FDR control. This is illustrated on both synthetic and real data.
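
    To make the pipeline concrete, a minimal sketch of matched filtering followed by FDR control is given below (the template, the noise level and the reuse of the `benjamini_hochberg` helper from the earlier sketch are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.stats import norm

def matched_filter_detection(data, template, sigma, alpha=0.05):
    """Correlate the data with a known template, convert the normalized
    statistic to one-sided p-values under white Gaussian noise, then apply
    the Benjamini-Hochberg procedure (earlier sketch) for FDR control."""
    template = template / np.linalg.norm(template)
    statistic = np.correlate(data, template, mode="same") / sigma  # unit variance under H0
    p_values = norm.sf(statistic)                                  # one-sided p-values
    return benjamini_hochberg(p_values, alpha)                     # boolean detection mask
```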

    SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables, and the Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile.
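
    For reference, the Sérsic elliptical profile mentioned at the end of the abstract is the standard surface-brightness law, given here in its generic form (not the paper's exact parameterization):

```latex
% Sersic surface-brightness profile: I_e is the intensity at the effective
% (half-light) radius R_e, n is the Sersic index, and b_n is a constant
% chosen so that R_e encloses half of the total light.
\[
  I(r) = I_e \exp\!\left\{ -b_n \left[ \left( \frac{r}{R_e} \right)^{1/n} - 1 \right] \right\}.
\]
```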

    Physical structure of the photodissociation regions in NGC 7023: Observations of gas and dust emission with Herschel

    The determination of the physical conditions in molecular clouds is a key step towards understanding their formation and evolution, and that of the associated star formation. We investigate the density, temperature, and column density of both dust and gas in the photodissociation regions (PDRs) located at the interface between the atomic and cold molecular gas of the NGC 7023 reflection nebula, and we study how young stars affect the gas and dust in their environment. Our approach, combining both dust and gas, delivers strong constraints on the physical conditions of the PDRs. We find dense and warm molecular gas of high column density in the PDRs.